
    ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 - JCT3V-C0032: A human visual system based 3D video quality metric

    This contribution proposes a full-reference, Human-Visual-System-based 3D video quality metric (HV3D). In this report, the metric is used to evaluate the quality of a compressed stereo pair formed from a decoded view and a synthesized view. The performance of the proposed metric is verified through a series of subjective tests and compared with that of the PSNR, SSIM, MS-SSIM, VIFp, and VQM metrics. The experimental results show that HV3D has the highest correlation with Mean Opinion Scores (MOS) among the tested metrics.
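
    As an illustration of the benchmarking step this abstract describes, here is a minimal Python sketch of how a metric's agreement with subjective scores is typically quantified: the Pearson correlation between per-sequence objective scores and MOS. The score values below are made-up placeholders, not data from the contribution.

    ```python
    from scipy.stats import pearsonr

    mos = [4.2, 3.8, 2.5, 1.9, 3.1]                 # hypothetical per-sequence MOS
    metric_scores = [0.91, 0.85, 0.62, 0.48, 0.74]  # hypothetical HV3D-style scores

    r, p_value = pearsonr(metric_scores, mos)
    print(f"Pearson correlation: {r:.3f} (p = {p_value:.3g})")
    ```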

    Effect of High Frame Rates on 3D Video Quality of Experience

    In this paper, we study the effect of increased frame rates on viewers' quality of experience with 3D video. We performed a series of subjective tests to elicit subjects' preferences among videos of the same scene at four frame rates: 24, 30, 48, and 60 frames per second (fps). The results revealed that subjects clearly prefer higher frame rates. In particular, Mean Opinion Score (MOS) values for the 60 fps 3D videos were 55% greater than those for the 24 fps 3D videos.
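
    A Mean Opinion Score is just the average of the ratings a video received across subjects; a minimal sketch of the comparison performed here, with invented ratings that do not reproduce the paper's 55% figure:

    ```python
    import numpy as np

    # Hypothetical per-subject ratings for the same scene at two frame rates
    ratings_24fps = np.array([2.0, 2.5, 3.0, 2.0, 2.5])
    ratings_60fps = np.array([4.0, 3.5, 4.5, 4.0, 3.5])

    mos_24, mos_60 = ratings_24fps.mean(), ratings_60fps.mean()
    print(f"MOS 24 fps: {mos_24:.2f}, MOS 60 fps: {mos_60:.2f}")
    print(f"Relative gain: {100 * (mos_60 - mos_24) / mos_24:.0f}%")
    ```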

    3D Video Quality Metric for 3D Video Compression

    As the evolution of multiview display technology brings glasses-free 3DTV closer to reality, MPEG and VCEG are preparing an extension to HEVC for encoding multiview video content. View synthesis in the current version of the 3D video codec is evaluated using PSNR as the quality measure. In this paper, we propose a full-reference, Human-Visual-System-based 3D video quality metric to be used in multiview encoding as an alternative to PSNR. The performance of our metric is tested in a two-view scenario: the quality of the compressed stereo pair, formed from a decoded view and a synthesized view, is evaluated at the encoder side. The performance is verified through a series of subjective tests and compared with that of the PSNR, SSIM, MS-SSIM, VIFp, and VQM metrics. Experimental results showed that our 3D quality metric has the highest correlation with Mean Opinion Scores (MOS) among the tested metrics.
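
    For reference, PSNR, the baseline measure the codec's view-synthesis stage uses, is peak signal power over mean squared error. A self-contained sketch, with random arrays standing in for real frames:

    ```python
    import numpy as np

    def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
        """PSNR in dB between a reference frame and its distorted counterpart."""
        mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

    ref = np.random.randint(0, 256, (720, 1280), dtype=np.uint8)     # stand-in frame
    dist = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(ref, dist):.2f} dB")
    ```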

    An Efficient Human Visual System Based Quality Metric for 3D Video

    Stereoscopic video technologies have been introduced to the consumer market in the past few years. A key factor in designing a 3D system is understanding how different visual cues and distortions affect the perceptual quality of stereoscopic video. The ultimate way to assess 3D video quality is through subjective tests. However, subjective evaluation is time-consuming, expensive, and in some cases not possible. The alternative is to develop objective quality metrics, which attempt to model the Human Visual System (HVS) in order to assess perceptual quality. Although several 2D quality metrics have been proposed for still images and videos, in the case of 3D, efforts are still at an early stage. In this paper, we propose a new full-reference quality metric for 3D content. Our method mimics the HVS by fusing information from the left and right views to construct the cyclopean view, and by taking into account the sensitivity of the HVS to contrast as well as the disparity between the views. In addition, a temporal pooling strategy is utilized to address temporal variations of quality within the video. Performance evaluations showed that our 3D quality metric quantifies the quality degradation caused by several representative types of distortion very accurately, with a Pearson correlation coefficient of 90.8%, competitive with state-of-the-art 3D quality metrics.
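
    A rough sketch of two ideas named in this abstract: forming a cyclopean image by fusing the left and right views, and temporally pooling per-frame quality scores into one video-level score. Plain averaging is used for both steps here as a placeholder; the paper's actual fusion and pooling are more elaborate.

    ```python
    import numpy as np

    def cyclopean_view(left: np.ndarray, right: np.ndarray) -> np.ndarray:
        # Naive fusion: pixel-wise mean of the two views.
        return (left.astype(np.float64) + right.astype(np.float64)) / 2.0

    def temporal_pool(frame_scores: np.ndarray) -> float:
        # Naive pooling: average per-frame quality over the sequence.
        # HVS-aware schemes often weight low-quality frames more heavily.
        return float(np.mean(frame_scores))

    left = np.random.rand(10, 480, 640)   # 10 stand-in frames per view
    right = np.random.rand(10, 480, 640)
    fused = cyclopean_view(left, right)
    scores = np.random.rand(10)           # stand-in per-frame quality scores
    print(f"Pooled video score: {temporal_pool(scores):.3f}")
    ```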

    3D Video Quality Metric for Mobile Applications

    In this paper, we propose a new full-reference quality metric for mobile 3D content. Our method is modeled around the Human Visual System: it fuses the information of the left and right channels, and considers the color components, the cyclopean view of the two videos, and disparity. It assesses the quality of 3D videos displayed on a mobile 3D display, taking into account the effects of resolution, distance from the viewer's eyes, and the dimensions of the mobile display. Performance evaluations showed that our mobile 3D quality metric tracks the quality degradation caused by several representative types of distortion with 82% correlation with subjective test results, considerably better than the state-of-the-art mobile 3D quality metric.
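
    One display-dependent quantity a mobile metric like this must account for is pixels per degree of visual angle, which ties resolution, screen width, and viewing distance together. A minimal sketch using the standard visual-angle formula; the screen and distance numbers are illustrative, not from the paper:

    ```python
    import math

    def pixels_per_degree(h_resolution_px: int, screen_width_cm: float,
                          viewing_distance_cm: float) -> float:
        # Visual angle subtended by the full screen width, in degrees.
        angle_deg = 2 * math.degrees(math.atan(screen_width_cm / (2 * viewing_distance_cm)))
        return h_resolution_px / angle_deg

    # e.g. a 1280-px-wide, ~11 cm display held 40 cm from the eyes
    print(f"{pixels_per_degree(1280, 11.0, 40.0):.1f} px/deg")
    ```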

    Benchmark 3D eye-tracking dataset for visual saliency prediction on stereoscopic 3D video

    Visual Attention Models (VAMs) predict which regions of an image or video are most likely to attract human attention. Although saliency detection is well explored for 2D image and video content, only a few attempts have been made to design 3D saliency prediction models. Newly proposed 3D visual attention models have to be validated over large-scale video saliency datasets that include eye-tracking information. Several such eye-tracking datasets are publicly available for 2D image and video content; for 3D, however, the research community still needs large-scale video saliency datasets for validating different 3D-VAMs. In this paper, we introduce a large-scale dataset containing eye-tracking data collected from 24 subjects who free-viewed 61 stereoscopic 3D videos (and their 2D versions). We evaluate the performance of existing saliency detection methods over the proposed dataset. In addition, we created an online benchmark for validating the performance of existing 2D and 3D visual attention models and for facilitating the addition of new VAMs. Our benchmark currently contains 50 different VAMs.
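
    One common way a benchmark like this scores a saliency model against eye-tracking data is to treat fixated pixels as positives and compute the AUC of the predicted saliency map. A minimal sketch with random placeholder maps; the paper does not specify that this exact variant is used:

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    predicted = np.random.rand(480, 640)          # stand-in saliency map in [0, 1]
    fixations = np.zeros((480, 640), dtype=int)   # binary fixation map from eye tracking
    fixations[np.random.randint(0, 480, 50), np.random.randint(0, 640, 50)] = 1

    auc = roc_auc_score(fixations.ravel(), predicted.ravel())
    print(f"AUC: {auc:.3f}")
    ```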

    A Learning-Based Visual Saliency Fusion Model for High Dynamic Range Video (LBVS-HDR)

    Saliency prediction for Standard Dynamic Range (SDR) videos has been well explored in the last decade. However, limited studies are available on High Dynamic Range (HDR) Visual Attention Models (VAMs). Since the characteristics of HDR content in terms of dynamic range and color gamut are quite different from those of SDR content, it is essential to identify which saliency attributes of HDR videos matter when designing a VAM, and to understand how to combine these features. To this end, we propose a learning-based visual saliency fusion method for HDR content (LBVS-HDR) that combines various visual saliency features. In our approach, various conspicuity maps are extracted from the HDR data, and a Random Forests algorithm is then trained to fuse the conspicuity maps, using data collected from an eye-tracking experiment. Performance evaluations demonstrate the superiority of the proposed fusion method over existing fusion methods.
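
    A sketch of learning-based fusion in the spirit of this abstract: per-pixel conspicuity values (e.g. intensity, color, and motion channels) are stacked as features, and a random forest is trained to predict ground-truth saliency derived from eye-tracking data. All arrays here are random stand-ins, and the feature set is an assumption, not the paper's exact channels.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    n_pixels, n_channels = 10_000, 4
    conspicuity = np.random.rand(n_pixels, n_channels)  # one column per saliency feature
    gaze_density = np.random.rand(n_pixels)             # fixation-density ground truth

    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(conspicuity, gaze_density)
    fused_saliency = model.predict(conspicuity)         # fused per-pixel saliency
    print(fused_saliency[:5])
    ```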

    Compression of High Dynamic Range Video Using the HEVC and H.264/AVC Standards

    Existing video coding standards such as H.264/AVC and High Efficiency Video Coding (HEVC) were designed based on the statistical properties of Low Dynamic Range (LDR) videos and are not tailored to the characteristics of High Dynamic Range (HDR) content. In this study, we investigate the performance of the latest LDR video compression standard, HEVC, as well as the widely deployed H.264/AVC standard, on HDR content. Subjective evaluations on an HDR display show that viewers clearly prefer videos coded with an HEVC-based encoder to those encoded with an H.264/AVC encoder. In particular, HEVC outperforms H.264/AVC by an average of 10.18% in terms of mean opinion score and 25.08% in terms of bit rate savings.
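
    A back-of-the-envelope sketch of one way a bit-rate-savings figure can be read: interpolate each codec's rate-quality curve and compare the rates needed to reach the same quality level. The rate/MOS operating points below are invented for illustration and do not reproduce the paper's 25.08% figure or its methodology.

    ```python
    import numpy as np

    # (bitrate in Mbps, MOS) operating points, sorted by increasing MOS
    hevc_rate, hevc_mos = np.array([1.0, 2.0, 4.0, 8.0]), np.array([2.6, 3.3, 4.0, 4.5])
    avc_rate, avc_mos = np.array([1.0, 2.0, 4.0, 8.0]), np.array([2.2, 2.9, 3.6, 4.2])

    target_mos = 3.5
    r_hevc = np.interp(target_mos, hevc_mos, hevc_rate)  # rate HEVC needs for target MOS
    r_avc = np.interp(target_mos, avc_mos, avc_rate)     # rate H.264/AVC needs
    print(f"Savings at MOS {target_mos}: {100 * (r_avc - r_hevc) / r_avc:.1f}%")
    ```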

    Evaluating the Performance of Existing Full-Reference Quality Metrics on High Dynamic Range (HDR) Video Content

    While a wide variety of Low Dynamic Range (LDR) quality metrics exist, only a limited number of metrics are designed specifically for High Dynamic Range (HDR) content. With the launch of an HDR video compression standardization effort by international standardization bodies, the need for an efficient video quality metric for HDR applications has become more pronounced. The objective of this study is to compare the performance of existing full-reference LDR and HDR video quality metrics on HDR content and to identify the most effective one for HDR applications. To this end, a new HDR video dataset is created, consisting of representative indoor and outdoor video sequences with different brightness and motion levels and representative types of distortions. The quality of each distorted video in this dataset is evaluated both subjectively and objectively. The correlation between the subjective and objective results confirms that the VIF quality metric outperforms all other tested metrics in the presence of the tested types of distortions.
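
    Metric comparisons of this kind commonly fit a monotonic logistic function mapping raw metric scores to MOS before correlating, so that metrics with different score ranges can be compared fairly. A minimal sketch under that assumption; the VIF values and MOS below are illustrative, not the study's measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import pearsonr

    def logistic(x, a, b, c, d):
        # 4-parameter logistic mapping from metric score to predicted MOS
        return a / (1.0 + np.exp(-c * (x - d))) + b

    vif_scores = np.array([0.35, 0.48, 0.55, 0.68, 0.80, 0.90])  # hypothetical VIF values
    mos = np.array([1.8, 2.4, 2.9, 3.5, 4.1, 4.6])               # hypothetical MOS

    params, _ = curve_fit(logistic, vif_scores, mos, p0=[3.0, 1.5, 5.0, 0.6], maxfev=10_000)
    r, _ = pearsonr(logistic(vif_scores, *params), mos)
    print(f"Pearson correlation after logistic fitting: {r:.3f}")
    ```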

    A Human Visual System-Based 3D Video Quality Metric

    Although several 2D quality metrics have been proposed for images and videos, in the case of 3D, efforts are still at an early stage. In this paper, we propose a new full-reference quality metric for 3D content. Our method is modeled around the Human Visual System (HVS): it fuses the information of the left and right channels, and considers the color components, the cyclopean view of the two videos, and disparity. Performance evaluations showed that our 3D quality metric successfully monitors the quality degradation caused by several representative types of distortion and achieves 86% correlation with the results of subjective evaluations.